29 research outputs found

    Detection of Macula and Recognition of Age-Related Macular Degeneration in Retinal Fundus Images

    In aged people, central vision is affected by Age-Related Macular Degeneration (AMD). AMD can be recognized in digital retinal fundus images by the presence of Drusen, Choroidal Neovascularization (CNV), and Geographic Atrophy (GA). Monitoring fundus images is time-consuming and costly for ophthalmologists, and an automated digital fundus photography monitoring system can reduce these problems. In this paper, we propose a new macula detection system based on contrast enhancement, top-hat transformation, and the modified Kirsch template method. Firstly, the retinal fundus image is processed through an image enhancement method so that the intensity distribution is improved for finer visualization. The contrast-enhanced image is further improved using the top-hat transformation function to make the intensity levels differentiable between the macula and other sections of the image. The retinal vessels are enhanced by employing the modified Kirsch template method, which enhances the vasculature structures and suppresses blob-like structures. Furthermore, Otsu thresholding is used to segment out the dark regions and separate the vessels to extract the candidate regions. The dark-region and background-estimated images are subtracted from the extracted blood vessel image to obtain the exact location of the macula. The proposed method was applied to 1349 images from the STARE, DRIVE, MESSIDOR, and DIARETDB1 databases and achieved an average sensitivity, specificity, accuracy, positive predictive value, F1 score, and area under curve of 97.79%, 97.65%, 97.60%, 97.38%, 97.57%, and 96.97%, respectively. Experimental results reveal that the proposed method attains better performance, in terms of visual quality and enriched quantitative analysis, in comparison with eminent state-of-the-art methods.
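The Otsu thresholding step mentioned above can be illustrated in isolation. The following is a minimal sketch, not the paper's full pipeline: it computes the Otsu threshold from an image histogram and applies it to a synthetic bimodal image with a dark "macula-like" blob.

```python
import numpy as np

def otsu_threshold(img: np.ndarray) -> int:
    """Return the Otsu threshold for an 8-bit grayscale image.

    Exhaustively searches for the threshold that maximizes the
    between-class variance of the resulting dark/bright split.
    """
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic bimodal image: a dark blob (intensity 30) on a brighter
# background (intensity 200), standing in for a fundus image.
img = np.full((64, 64), 200, dtype=np.uint8)
img[20:40, 20:40] = 30
t = otsu_threshold(img)
dark_mask = img < t  # dark-region segmentation, as in the pipeline above
```

In the paper's pipeline this mask would be combined with the vessel map and background estimate; here it simply separates the dark blob from the background.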

    An Ensemble Learning Model for COVID-19 Detection from Blood Test Samples

    Current research endeavors in the application of artificial intelligence (AI) methods to the diagnosis of COVID-19 have proven indispensable, with very promising results. Despite these promising results, there are still limitations in real-time detection of COVID-19 using reverse transcription polymerase chain reaction (RT-PCR) test data, such as limited datasets, imbalanced classes, high misclassification rates, and the need for specialized research in identifying the best features to improve prediction rates. This study aims to investigate and apply the ensemble learning approach to develop prediction models for effective detection of COVID-19 using routine laboratory blood test results. Hence, an ensemble machine learning-based COVID-19 detection system is presented, aiming to help clinicians diagnose this virus effectively. The experiment was conducted using custom convolutional neural network (CNN) models as a first-stage classifier and 15 supervised machine learning algorithms as second-stage classifiers: K-Nearest Neighbors, Support Vector Machine (Linear and RBF), Naive Bayes, Decision Tree, Random Forest, MultiLayer Perceptron, AdaBoost, ExtraTrees, Logistic Regression, Linear and Quadratic Discriminant Analysis (LDA/QDA), Passive Aggressive, Ridge, and Stochastic Gradient Descent Classifier. Our findings show that an ensemble learning model based on DNN and ExtraTrees achieved a mean accuracy of 99.28% and an area under curve (AUC) of 99.4%, while AdaBoost gave a mean accuracy of 99.28% and an AUC of 98.8% on the San Raffaele Hospital dataset. The comparison of the proposed COVID-19 detection approach with other state-of-the-art approaches on the same dataset shows that the proposed method outperforms several other COVID-19 diagnostic methods.
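The paper combines classifiers in a two-stage (stacked) ensemble; a much simpler combiner, majority voting, illustrates the general idea of ensemble learning in a few lines. The class labels and "base models" below are illustrative only, not the study's actual predictions.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model class predictions by majority vote.

    `predictions` is a list of equal-length lists, one per base model;
    returns one label per sample (ties broken by first-seen label).
    """
    n_samples = len(predictions[0])
    combined = []
    for i in range(n_samples):
        votes = [model_preds[i] for model_preds in predictions]
        combined.append(Counter(votes).most_common(1)[0][0])
    return combined

# Three hypothetical base classifiers voting on four blood-test samples
# (1 = COVID-positive, 0 = negative).
preds = [
    [1, 0, 1, 0],  # e.g. ExtraTrees
    [1, 1, 1, 0],  # e.g. AdaBoost
    [0, 0, 1, 0],  # e.g. KNN
]
print(majority_vote(preds))  # → [1, 0, 1, 0]
```

Stacking, as used in the study, replaces the fixed voting rule with a trained second-stage model that learns how much to trust each base classifier.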

    Pareto Optimized Large Mask Approach for Efficient and Background Humanoid Shape Removal

    The purpose of automated video object removal is not only to detect and remove the object of interest automatically, but also to utilize background context to inpaint the foreground area. Video inpainting requires filling spatiotemporal gaps in a video with convincing material, necessitating both temporal and spatial consistency; the inpainted part must seamlessly integrate into the background in a variety of scenes, and it must maintain a consistent appearance in subsequent frames even if its surroundings change noticeably. We introduce a deep learning-based methodology for removing unwanted human-like shapes in videos. The method uses Pareto-optimized Generative Adversarial Network (GAN) technology, which is a novel contribution. The system automatically selects the Region of Interest (ROI) for each humanoid shape and uses a skeleton detection module to determine which humanoid shape to retain. The semantic masks of human-like shapes are created using a semantic-aware occlusion-robust model that has four primary components: feature extraction, and local, global, and semantic branches. The global branch encodes occlusion-aware information to make the extracted features resistant to occlusion, while the local branch retrieves fine-grained local characteristics. A modified large mask inpainting approach is employed to eliminate a person from the image, leveraging Fast Fourier convolutions and utilizing polygonal chains and rectangles with unpredictable aspect ratios. The inpainter network takes the input image and the mask to create an output image excluding the background humanoid shapes. The generator uses an encoder-decoder structure with skip connections to recover spatial information, and dilated convolution and squeeze-and-excitation blocks to make the regions behind the humanoid shapes consistent with their surroundings.
The discriminator penalizes dissimilar structure at the patch scale, and the refiner network catches features around the boundaries of each background humanoid shape. The efficiency was assessed using the Learned Perceptual Image Patch Similarity (LPIPS), Fréchet Inception Distance (FID), and Structural Similarity Index Measure (SSIM) metrics, and showed promising results in the fully automated background person removal task. The method was evaluated on two video object segmentation datasets (DAVIS: LPIPS 0.02, FID 5.01, SSIM 0.79; YouTube-VOS: 0.03, 6.22, 0.78) as well as a database of 66 distinct video sequences of people behind a desk in an office environment (0.02, 4.01, and 0.78, respectively).
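The Fast Fourier convolutions used by the inpainter exploit the fact that convolution becomes a pointwise product in the frequency domain, giving every output pixel a global receptive field in one step. The sketch below shows only that core principle with NumPy FFTs, not the paper's actual inpainting network.

```python
import numpy as np

def fft_conv2d(image, kernel):
    """Circular 2-D convolution computed in the frequency domain.

    The kernel is zero-padded to the image size and re-centered so the
    output is spatially aligned with a direct convolution.
    """
    kh, kw = kernel.shape
    padded = np.zeros_like(image, dtype=float)
    padded[:kh, :kw] = kernel
    # Shift the kernel so its center sits at the origin (no output shift).
    padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(padded)))

img = np.random.default_rng(0).random((32, 32))
box = np.ones((3, 3)) / 9.0  # simple 3x3 averaging kernel
out = fft_conv2d(img, box)   # each pixel becomes its 3x3 neighborhood mean
```

In a Fast Fourier convolution layer this spectral product is learned and mixed with a conventional local branch; here the kernel is fixed purely for illustration.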

    The Model for Learning Objects Design Based on Semantic Technologies

    The paper presents a comparison of state-of-the-art methods and techniques for the implementation of learning objects (LO) in the field of information and communication technologies (ICT) using semantic web services for e-learning. The web can serve as a perfect technological environment for individualized learning, which is often based on interactive learning objects. This allows learners to be uniquely identified, content to be specifically personalized, and, as a result, a learner's progress to be monitored, supported, and assessed. While a range of technological solutions for the development of integrated e-learning environments already exists, the most appropriate solutions require further improvement in the implementation of novel learning objects, unification of standardization, and integration of learning environments based on semantic web services (SWS), which are still in the early stages of development. This paper introduces a proprietary architectural model for distributed e-learning environments based on SWS, enabling the implementation of a successive learning process by developing innovative learning objects based on modern learning methods. A successful technical implementation of our approach in the environment of Kaunas University of Technology is further detailed and evaluated.

    Medical Internet-of-Things Based Breast Cancer Diagnosis Using Hyperparameter-Optimized Neural Networks

    In today’s healthcare setting, the accurate and timely diagnosis of breast cancer is critical for recovery and treatment in the early stages. In recent years, the Internet of Things (IoT) has experienced a transformation that allows the analysis of real-time and historical data using artificial intelligence (AI) and machine learning (ML) approaches. Medical IoT combines medical devices and AI applications with healthcare infrastructure to support medical diagnostics. Current state-of-the-art approaches fail to diagnose breast cancer in its initial period, resulting in the death of many women. As a result, medical professionals and researchers face a tremendous problem in early breast cancer detection. We propose a medical IoT-based diagnostic system that competently identifies malignant and benign cases in an IoT environment to resolve the difficulty of identifying early-stage breast cancer. An artificial neural network (ANN) and a convolutional neural network (CNN) with hyperparameter optimization are used for malignant vs. benign classification, while the Support Vector Machine (SVM) and Multilayer Perceptron (MLP) are used as baseline classifiers for comparison. Hyperparameters are important for machine learning algorithms since they directly control the behavior of training algorithms and have a significant effect on the performance of machine learning models. We employ a particle swarm optimization (PSO) feature selection approach to select more satisfactory features from the breast cancer dataset to enhance the classification performance using MLP and SVM, while grid-based search is used to find the best combination of hyperparameters for the CNN and ANN models. The Wisconsin Diagnostic Breast Cancer (WDBC) dataset was used to test the proposed approach. The proposed model achieved a classification accuracy of 98.5% using CNN and 99.2% using ANN.
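Grid-based hyperparameter search, as used above for the CNN and ANN, simply evaluates every combination of candidate values and keeps the best-scoring one. The sketch below uses a stand-in scoring function and a hypothetical parameter grid; in the study, the score would be validation accuracy on the WDBC data.

```python
import itertools

def grid_search(param_grid, score_fn):
    """Exhaustive search over all hyperparameter combinations.

    `param_grid` maps parameter names to candidate values; `score_fn`
    receives one combination as a dict and returns a validation score.
    """
    names = list(param_grid)
    best_params, best_score = None, float("-inf")
    for combo in itertools.product(*(param_grid[n] for n in names)):
        params = dict(zip(names, combo))
        score = score_fn(params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# Hypothetical grid; the toy score prefers lr = 0.01 and larger layers,
# standing in for cross-validated accuracy of the trained network.
grid = {"learning_rate": [0.1, 0.01, 0.001], "hidden_units": [16, 32, 64]}
toy_score = lambda p: -abs(p["learning_rate"] - 0.01) + p["hidden_units"] / 1000
best, _ = grid_search(grid, toy_score)
print(best)  # → {'learning_rate': 0.01, 'hidden_units': 64}
```

The cost grows multiplicatively with each added parameter, which is why PSO-style search is attractive once the space gets large.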

    Malignant skin melanoma detection using image augmentation by oversampling in nonlinear lower-dimensional embedding manifold

    The continuous rise in skin cancer cases, especially in malignant melanoma, has resulted in a high mortality rate among the affected patients due to late detection. Challenges affecting the success of skin cancer detection include small datasets and data scarcity, noisy data, inconsistency in image sizes and resolutions, unavailability of data, reliability of labeled data (ground truth), and class imbalance in skin cancer datasets. This study presents a novel data augmentation technique based on a covariant Synthetic Minority Oversampling Technique (SMOTE) to address the data scarcity and class imbalance problem. We propose an improved data augmentation model for effective detection of melanoma skin cancer. Our method is based on data oversampling in a nonlinear lower-dimensional embedding manifold for creating synthetic melanoma images. The proposed data augmentation technique is used to generate a new skin melanoma dataset using dermoscopic images from the publicly available PH2 dataset. The augmented images were used to train the SqueezeNet deep learning model. The experimental results in the binary classification scenario show a significant improvement in the detection of melanoma with respect to accuracy (92.18%), sensitivity (80.77%), specificity (95.1%), and F1-score (80.84%). We also improved the multiclass classification results: for melanoma detection, 89.2% sensitivity and 96.2% specificity; for atypical nevus detection, 65.4% sensitivity and 72.2% specificity; and for common nevus detection, 66% sensitivity and 77.2% specificity. The proposed classification framework outperforms some of the state-of-the-art methods in detecting skin melanoma.
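Classical SMOTE, which the covariant variant above builds on, creates synthetic minority samples by interpolating between a minority point and one of its nearest minority neighbors. The sketch below implements that basic idea in NumPy on a toy 2-D cloud; the paper's method instead performs this oversampling in the learned nonlinear lower-dimensional embedding of the dermoscopic images.

```python
import numpy as np

def smote_oversample(minority, n_synthetic, k=3, rng=None):
    """Generate synthetic minority samples, SMOTE-style.

    Each synthetic point is a random interpolation between a minority
    sample and one of its k nearest minority neighbors, so new points
    lie on line segments inside the minority region.
    """
    rng = rng or np.random.default_rng(0)
    synthetic = []
    for _ in range(n_synthetic):
        i = rng.integers(len(minority))
        dists = np.linalg.norm(minority - minority[i], axis=1)
        neighbours = np.argsort(dists)[1:k + 1]  # skip the point itself
        j = rng.choice(neighbours)
        lam = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(minority[i] + lam * (minority[j] - minority[i]))
    return np.array(synthetic)

# Toy minority class: four points at the corners of the unit square.
minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
new_points = smote_oversample(minority, n_synthetic=10)
```

Because every synthetic point is a convex combination of two real samples, oversampling in a good embedding keeps the new points on the data manifold rather than scattering them into implausible regions.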

    Are you ashamed? Can a gaze tracker tell?

    Our aim was to determine the possibility of detecting cognitive emotion information (neutral, disgust, shameful, “sensory pleasure”) by using a remote eye tracker within an approximate range of 1 meter. Our implementation was based on a self-learning ANN used for profile building and emotion status identification and recognition. Participants in the experiment were provoked with audiovisual stimuli (videos with sounds) to measure their emotional feedback. The proposed system was able to classify each felt emotion with an average accuracy of 90% (2-second measurement interval).

    A Hybrid LiDAR/Radar Deep Learning and Vehicle Recognition Mechanism for Pre-Crash Safety Control of Autonomous Vehicles

    The pre-crash intelligent control of autonomous vehicles is a very complex problem, especially in vehicle pre-crash scenarios and at intersections in real-time environments. The goal of this research is to develop a new artificial-intelligence-based adaptive controller for an autonomous vehicle pre-crash system, along with a vehicle recognition module, tested in MATLAB with several detailed modules. The following tasks were set: finding objects in sensor data (LiDAR, RADAR), speed and steering control, and vehicle recognition using a convolutional neural network and AlexNet. In this research paper, we implemented real-time image/LiDAR processing. We present a real-time system composed of comprehensive modules: 3D object detection, object clustering and search, ground removal, and deep learning using convolutional neural networks. Starting with the nearest-vehicle module, our target is to find the nearest car ahead and consider it the primary obstacle. This paper presents an adaptive cruise pre-crash system and vehicle recognition. The adaptive cruise pre-crash module depends on deep learning and LiDAR sensor data; it is meant to counter reckless driver behavior on the road by adjusting the vehicle speed to maintain a safe distance from objects ahead (such as cars, humans, or bicycles) when the driver tries to raise speed. 
At the same time, the vehicle recognition module detects and recognizes the vehicles surrounding the car.
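The nearest-vehicle step described above can be sketched as a simple geometric filter over clustered LiDAR detections: keep only points ahead of the ego vehicle and inside its lane corridor, then take the closest. This is a minimal illustration with assumed coordinates (x forward, y left) and a hypothetical lane half-width, not the paper's MATLAB implementation.

```python
import numpy as np

def nearest_ahead(points, lane_half_width=1.5):
    """Pick the nearest obstacle ahead from ego-frame detections.

    `points` is an (N, 2) array of (x, y) positions with x pointing
    forward; only points ahead of the ego vehicle and inside its lane
    corridor count, and the closest one is returned (or None).
    """
    ahead = points[(points[:, 0] > 0) & (np.abs(points[:, 1]) < lane_half_width)]
    if len(ahead) == 0:
        return None
    return ahead[np.argmin(np.linalg.norm(ahead, axis=1))]

# Hypothetical clustered detections: one car ahead in the next lane,
# one behind, and one directly ahead at 12 m -- the primary obstacle.
detections = np.array([[15.0, 4.0], [-8.0, 0.5], [12.0, 0.3]])
primary = nearest_ahead(detections)
```

The adaptive cruise module would then regulate speed against the distance to this primary obstacle.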